I confess to Hutchinson that if I were a politician, I would be scared to use BattlegroundAI. Generative AI tools are known to "hallucinate," a polite way of saying that they sometimes make things up out of whole cloth. (They bullshit, to use academic parlance.) I ask how she's ensuring that the political content BattlegroundAI generates is accurate.
"Nothing is automated," she replies. Hutchinson notes that BattlegroundAI's copy is a starting point, and that humans on campaigns are meant to review and approve it before it goes out. "You might not have a lot of time, or a huge team, but you're definitely reviewing it."
Of course, there's a rising movement opposing how AI companies train their products on art, writing, and other creative work without asking for permission. I ask Hutchinson what she'd say to people who might oppose how tools like ChatGPT are trained. "Those are incredibly valid concerns," she says. "We need to talk to Congress. We need to talk to our elected officials."
I ask whether BattlegroundAI is looking at offering language models that train only on public domain or licensed data. "Always open to that," she says. "We also need to give folks, especially those who are under time constraints, in resource-constrained environments, the best tools that are available to them, too. We want to have consistent results for users and high-quality information, so the more models that are available, I think the better for everybody."
And how would Hutchinson respond to people in the progressive movement, who generally align themselves with the labor movement, objecting to automating ad copywriting? "Obviously valid concerns," she says. "Fears that come with the advent of any new technology: we're afraid of the computer, of the light bulb."
Hutchinson lays out her stance: She doesn't see this as a replacement for human labor so much as a way to reduce grunt work. "I worked in advertising for a very long time, and there's so many elements of it that are repetitive, that are honestly draining of creativity," she says. "AI takes away the boring elements." She sees BattlegroundAI as a helpmate for overstretched and underfunded teams.
Taylor Coots, a Kentucky-based political strategist who recently began using the service, describes it as "very sophisticated," and says it helps identify groups of target voters and ways to tailor messaging to reach them in a way that would otherwise be difficult for small campaigns. In battleground races in gerrymandered districts, where progressive candidates are major underdogs, budgets are tight. "We don't have millions of dollars," he says. "Any opportunities we have for efficiencies, we're looking for those."
Will voters care if the writing in the digital political ads they see is generated with the help of AI? "I'm not sure there is anything more unethical about having AI generate content than there is having unnamed staff or interns generate content," says Peter Loge, an associate professor and program director at George Washington University who founded a project on ethics in political communication.
"If one could mandate that all political writing done with the help of AI be disclosed, then logically you would have to mandate that all political writing" (such as emails, ads, and op-eds) "not done by the candidate be disclosed," he adds.
Still, Loge has concerns about what AI does to public trust on a macro level, and how it might affect the way people respond to political messaging going forward. "One risk of AI is less what the technology does, and more how people feel about what it does," he says. "People have been faking images and making stuff up for as long as we've had politics. The recent attention on generative AI has increased people's already incredibly high levels of cynicism and distrust. If everything can be fake, then maybe nothing is true."
Hutchinson, meanwhile, is focused on her company's shorter-term impact. "We really want to help people now," she says. "We're trying to move as fast as we can."